22 research outputs found

    Black-box Mixed-Variable Optimisation using a Surrogate Model that Satisfies Integer Constraints

    A challenging problem in both engineering and computer science is that of minimising a function for which we have no mathematical formulation available, that is expensive to evaluate, and that contains continuous and integer variables, for example in automatic algorithm configuration. Surrogate-based algorithms are very suitable for this type of problem, but most existing techniques are designed with only continuous or only discrete variables in mind. Mixed-Variable ReLU-based Surrogate Modelling (MVRSM) is a surrogate-based algorithm that uses a linear combination of rectified linear units, defined in such a way that (local) optima satisfy the integer constraints. This method outperforms the state of the art on several synthetic benchmarks with up to 238 continuous and integer variables, and achieves competitive performance on two real-life benchmarks: XGBoost hyperparameter tuning and Electrostatic Precipitator optimisation. Comment: Ann Math Artif Intell (2020)
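
    The defining construction here, a surrogate whose local optima land on integer points, can be illustrated with a small sketch. The toy objective, the single integer variable, and the sample budget below are illustrative assumptions rather than the paper's setup; the point is only that every ReLU basis function kinks at an integer coordinate, so the fitted piecewise-linear surrogate attains its local optima at integers.

# Sketch of the MVRSM idea under the assumptions stated above: a ReLU
# expansion whose kinks all sit on integer values of the (single) integer
# variable, fitted by least squares and then minimised over integer candidates.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive objective with one integer variable z in {0, ..., 10}.
def expensive_objective(z):
    return (z - 6.3) ** 2

# Basis: ReLUs that kink only at the integer breakpoints, in both directions,
# plus a constant term.
breakpoints = np.arange(0, 11)

def features(z):
    z = np.atleast_1d(z).astype(float)
    up = np.maximum(0.0, np.subtract.outer(z, breakpoints))
    down = np.maximum(0.0, np.subtract.outer(breakpoints, z).T)
    return np.hstack([np.ones((z.size, 1)), up, down])

# Fit the linear output weights on a handful of expensive evaluations.
Z = rng.choice(11, size=8, replace=False)
y = np.array([expensive_objective(z) for z in Z])
w, *_ = np.linalg.lstsq(features(Z), y, rcond=None)

# Because the surrogate is piecewise linear with all kinks on integers, its
# local optima lie on integer points; in this 1-D toy case we simply scan them.
candidates = np.arange(0, 11)
print("surrogate minimiser:", candidates[np.argmin(features(candidates) @ w)])

    In higher dimensions the surrogate would be minimised with a local search that respects the integer constraints rather than a scan, but the integer-kink construction stays the same.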

    The Impact of Asynchrony on Parallel Model-Based EAs

    In a parallel EA one can strictly adhere to the generational clock and wait for all evaluations in a generation to be done. However, the resulting idle time limits the throughput of the algorithm and wastes computational resources. Alternatively, an EA can be made asynchronous parallel. However, EAs using classic recombination and selection operators (GAs) are known to suffer from an evaluation time bias, which also influences the performance of the approach. Model-Based Evolutionary Algorithms (MBEAs) are more scalable than classic GAs by virtue of capturing the structure of a problem in a model. If this model is learned through linkage learning based on the population, the learned model may also capture biases. Thus, if an asynchronous parallel MBEA is also affected by an evaluation time bias, the learned models could become less suited to solving the problem, reducing performance. Therefore, in this work, we study the presence and impact of evaluation time biases on MBEAs in an asynchronous parallelization setting, and compare this to the biases in GAs. We find that a modern MBEA, GOMEA, is unaffected by evaluation time biases, while the more classical MBEA, ECGA, is affected, much like GAs are. Comment: 9 pages, 3 figures, 3 tables, submitted to GECCO 202
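
    The evaluation-time bias discussed above is easiest to see in code. The sketch below is a generic asynchronous steady-state loop, not the experimental setup of the paper: the toy objective, the artificial correlation between fitness and evaluation time, and all population settings are assumptions for illustration.

# Asynchronous steady-state sketch: results are integrated as soon as any
# evaluation finishes, so solutions that evaluate faster cycle through the
# loop more often -- the source of the evaluation-time bias discussed above.
import random
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def evaluate(bits):
    # Toy fitness: number of ones; pretend solutions with more ones are slower.
    time.sleep(0.001 * sum(bits))
    return sum(bits)

def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(8)]
best_seen = 0

with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {pool.submit(evaluate, ind): i for i, ind in enumerate(population)}
    for _ in range(100):
        # No generational barrier: take whichever evaluation finishes first.
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            i = pending.pop(fut)
            best_seen = max(best_seen, fut.result())
            population[i] = mutate(population[i])  # steady-state replacement
            pending[pool.submit(evaluate, population[i])] = i
    for fut in pending:
        fut.cancel()

print("best fitness observed:", best_seen)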

    EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions

    Surrogate algorithms such as Bayesian optimisation are specifically designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated on well-established synthetic benchmarks that have no expensive objective, and on only one or two real-life applications that vary widely between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. This makes it very difficult to draw conclusions about the effect of algorithmic contributions. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the model used. A further contribution is that we make the algorithms and benchmark problem instances publicly available, contributing to a more uniform analysis of surrogate algorithms. Most importantly, we include the performance of the six algorithms on all evaluated problem instances. This results in a unique new dataset that lowers the bar for researching new methods, as the number of expensive evaluations required for comparison is significantly reduced. Comment: 13 pages
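
    To make this kind of standardisation concrete, the sketch below shows a hypothetical benchmarking loop in the spirit of what the abstract describes; it is not EXPObench's actual API. Every expensive evaluation is timed and the best-so-far value is logged, which is the kind of per-evaluation record that allows algorithms to be compared on both solution quality and evaluation cost.

# Hypothetical benchmarking harness (illustrative, not EXPObench's interface):
# each optimiser is run on each problem for a fixed evaluation budget, and the
# wall-clock time of every expensive call is recorded alongside the objective.
import random
import time

def run_benchmark(optimisers, problems, budget=50):
    records = []
    for problem_name, objective, sampler in problems:
        for optimiser_name, suggest in optimisers:
            history = []
            for _ in range(budget):
                x = suggest(history, sampler)
                t0 = time.perf_counter()
                y = objective(x)  # the expensive call being benchmarked
                eval_time = time.perf_counter() - t0
                history.append((x, y))
                records.append({
                    "problem": problem_name, "algorithm": optimiser_name,
                    "y": y, "best": min(h[1] for h in history),
                    "eval_time": eval_time,
                })
    return records

# Illustrative placeholders: random search on a toy 1-D problem.
random_search = ("random search", lambda history, sampler: sampler())
toy_problem = ("toy", lambda x: (x - 0.3) ** 2, lambda: random.random())
results = run_benchmark([random_search], [toy_problem])
print("best value found:", min(r["best"] for r in results))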

    Solving multi-structured problems by introducing linkage kernels into GOMEA

    Model-Based Evolutionary Algorithms (MBEAs) can be highly scalable by virtue of linkage (or variable interaction) learning. This requires, however, that the linkage model can capture the exploitable structure of a problem. Usually, an attempt is made to capture a single type of linkage structure using models such as a linkage tree. In practice, however, problems may exhibit multiple linkage structures. This is for instance the case in multi-objective optimization when the objectives have different linkage structures. This cannot be modelled sufficiently well with linkage models that aim to capture a single type of linkage structure, diminishing the advantages brought by MBEAs. Therefore, here, we introduce linkage kernels, whereby a linkage structure is learned for each solution over its local neighborhood. We implement linkage kernels into the MBEA known as GOMEA, which was previously found to be highly scalable when solving various problems. We further introduce a novel benchmark function called Best-of-Traps (BoT), in which the degree to which different linkage structures are present can be adjusted. On both BoT and a worst-case scenario-based variant of the well-known MaxCut problem, we experimentally find a vast performance improvement of linkage-kernel GOMEA over GOMEA with a single linkage tree, as well as over the MBEA known as DSMGA-II.
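
    The kernel idea itself is compact: instead of learning one linkage model from the whole population, learn one per solution from that solution's nearest neighbours. The sketch below illustrates this with pairwise mutual information on binary variables; the neighbourhood size, the distance measure, and the use of a plain dependency matrix instead of GOMEA's linkage tree are simplifying assumptions, not the paper's implementation.

# Linkage-kernel sketch: for one solution, restrict linkage learning to its
# k nearest neighbours (Hamming distance), so different regions of the search
# space can yield different dependency structures.
import numpy as np

def pairwise_mutual_information(pop):
    # Empirical pairwise MI between the binary variables of a (sub)population.
    n, ell = pop.shape
    mi = np.zeros((ell, ell))
    for i in range(ell):
        for j in range(i + 1, ell):
            joint = np.zeros((2, 2))
            for a in (0, 1):
                for b in (0, 1):
                    joint[a, b] = np.mean((pop[:, i] == a) & (pop[:, j] == b))
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            with np.errstate(divide="ignore", invalid="ignore"):
                terms = joint * np.log(joint / np.outer(pi, pj))
            mi[i, j] = mi[j, i] = np.nansum(terms)
    return mi

def linkage_kernel(pop, index, k=10):
    # Neighbourhood of one solution by Hamming distance (includes itself).
    distances = np.sum(pop != pop[index], axis=1)
    neighbours = pop[np.argsort(distances)[:k]]
    return pairwise_mutual_information(neighbours)

rng = np.random.default_rng(0)
population = rng.integers(0, 2, size=(50, 12))
print(linkage_kernel(population, index=0).round(2))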

    On the impact of linkage learning, gene-pool optimal mixing, and non-redundant encoding on permutation optimization

    Gene-pool Optimal Mixing Evolutionary Algorithms (GOMEAs) have been shown to achieve state-of-the-art results on various types of optimization problems with various types of problem variables. Recently, a GOMEA for permutation spaces was introduced by leveraging the random keys encoding, obtaining promising first results on permutation flow shop instances. A key cited strength of GOMEAs is linkage learning, i.e., the ability to determine and leverage, during optimization, key dependencies between problem variables. However, the added value of linkage learning was not tested in depth for permutation GOMEA. Here, we introduce a new version of permutation GOMEA, called qGOMEA, that works directly in permutation space, removing the redundancy of the random keys encoding. We additionally consider various sources of linkage information, including random noise, in both GOMEA variants, and compare performance with various classic genetic algorithms on a wider range of problems than considered before. We find that, although the benefits of linkage learning are clearly visible for various artificial benchmark problems, this is far less the case for various real-world inspired problems. Finally, we find that qGOMEA performs best and is applicable to a wider range of permutation problems.
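
    The redundancy that qGOMEA removes is easy to demonstrate: under the random keys encoding, a permutation is obtained by sorting the indices of a real-valued key vector, so many distinct key vectors decode to the same permutation. The small sketch below is illustrative only and independent of either GOMEA variant.

# Random keys vs. direct permutation encoding (illustrative sketch).
import numpy as np

def decode_random_keys(keys):
    # The index with the smallest key comes first, and so on: argsort decodes
    # a real-valued key vector into a permutation.
    return np.argsort(keys)

rng = np.random.default_rng(0)
keys_a = rng.random(6)
keys_b = keys_a + 0.01 * rng.random(6)  # a different key vector ...

print(decode_random_keys(keys_a))
print(decode_random_keys(keys_b))       # ... that decodes to the same permutation here

# A direct, non-redundant representation stores the permutation itself.
direct = np.array([3, 0, 5, 1, 4, 2])
assert sorted(direct.tolist()) == list(range(6))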

    Encoding OAS: Solving order acceptance and scheduling with modified black box approaches

    In many real-world scheduling problems there exist hard deadlines after which tasks can no longer be performed. Conversely, not all tasks are necessarily required to be scheduled. Furthermore, the problem investigated in this thesis includes sequence-dependent setup times, an aspect reminiscent of the Travelling Salesperson problem. These elements are the defining additions of Order Acceptance and Scheduling with Sequence-Dependent Setup Times, and they introduce additional difficulties to the problem of scheduling. Given the complexity of this problem, it is surprising that no evaluation of black-box approaches has been performed. This thesis investigates the applicability, strengths, and weaknesses of black-box optimization approaches on various instances of this problem. To improve performance, two hints were initially provided to the approaches: the Time hint informs the approaches of the completion time of the last order in the evaluated schedule, while the Bounds hint leaks information about the release time and deadline of each order. Both have notable upsides and downsides with regard to performance. For the identified weaknesses, improvements to the most promising approach, permutation GOMEA, are proposed and evaluated. These improvements include the addition of various local search approaches, including one derived from an exact approach. The most notable result is a new approach based on GOMEA, named qGOMEA, which outperforms all evaluated algorithms, including the current state of the art for the Order Acceptance and Scheduling problem. Furthermore, applicability to other permutation problems is demonstrated by improved performance on benchmark functions.
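
    To make the problem structure concrete, the sketch below decodes a permutation into an Order Acceptance and Scheduling solution with sequence-dependent setup times: orders are tried in permutation order and rejected when they cannot meet their deadline. The instance data, the greedy rejection rule, and the revenue-only objective are simplifying assumptions for illustration, not the evaluation function used in the thesis.

# Simplified permutation-to-schedule decoder for Order Acceptance and
# Scheduling with sequence-dependent setup times (illustrative assumptions).
def decode_schedule(permutation, release, processing, deadline, revenue, setup):
    t, prev, total, accepted = 0.0, None, 0.0, []
    for j in permutation:
        start = max(t, release[j]) + (setup[prev][j] if prev is not None else 0.0)
        finish = start + processing[j]
        if finish <= deadline[j]:
            accepted.append(j)
            t, prev, total = finish, j, total + revenue[j]
        # else: order j is rejected (not scheduled) and earns no revenue
    return total, accepted

# Tiny illustrative instance with three orders.
release    = [0.0, 2.0, 0.0]
processing = [3.0, 2.0, 4.0]
deadline   = [10.0, 6.0, 9.0]
revenue    = [5.0, 4.0, 6.0]
setup      = [[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]]

print(decode_schedule([2, 1, 0], release, processing, deadline, revenue, setup))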

    Hospital simulation model optimisation with a random ReLU expansion surrogate model

    The industrial challenge of the GECCO 2021 conference is an expensive optimisation problem in which the parameters of a hospital simulation model need to be tuned to optimality. We show how a surrogate-based optimisation framework, with a random ReLU expansion as the surrogate model, outperforms other methods such as Bayesian optimisation, Hyperopt, and random search on this problem.
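
    A random ReLU expansion surrogate is a small construction: random hidden weights and biases are drawn once and kept fixed, only the linear output layer is fitted by least squares, and the resulting cheap model is searched instead of the expensive simulator. The sketch below uses a placeholder objective and arbitrary sizes; it is a generic illustration of the model class, not the competition entry.

# Random ReLU expansion surrogate (generic sketch, illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):  # placeholder for the hospital simulation model
    return np.sum((x - 0.25) ** 2, axis=-1)

dim, n_features, n_samples = 5, 200, 40
W = rng.normal(size=(n_features, dim))   # random hidden weights, kept fixed
b = rng.normal(size=n_features)          # random hidden biases, kept fixed

def relu_features(X):
    return np.maximum(0.0, X @ W.T + b)

# Fit only the linear output layer on a small set of expensive evaluations.
X = rng.random((n_samples, dim))
y = expensive_simulation(X)
w, *_ = np.linalg.lstsq(relu_features(X), y, rcond=None)

# Search the cheap surrogate (here with plain random candidates) for a minimiser.
candidates = rng.random((10_000, dim))
best = candidates[np.argmin(relu_features(candidates) @ w)]
print("suggested parameters:", best.round(2))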

    Genetic Permutation Benchmark

